FROM THE EDITOR
This week we delve into the exotic and mysterious world of supercomputing with a look at the new Cray XD1, where FPGAs are gaining a foothold as reconfigurable compute engines that accelerate performance-critical algorithms. With a modest investment of time to move intensive data crunching into FPGA hardware, we can gain several orders of magnitude of compute performance over conventional processor-based approaches.
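To give a rough sense of what gets moved (a hypothetical illustration in C, not code from the XD1 or from Cray's tools), the kernels that benefit most are tight, bit-level inner loops like the Hamming-distance scan below; the XOR-and-count work that costs a CPU many instructions per word collapses into a few levels of parallel logic in FPGA fabric, evaluated once per clock.

    /* Illustrative only: the kind of bit-level hot spot typically moved
     * into FPGA fabric. Function names and data layout are hypothetical. */
    #include <stdint.h>
    #include <stddef.h>

    static unsigned popcount64(uint64_t x)
    {
        unsigned n = 0;
        while (x) { x &= x - 1; n++; }   /* clear the lowest set bit */
        return n;
    }

    /* Hamming-distance scan: count mismatched bits between a pattern
     * and each word of a large data set. */
    unsigned long hamming_scan(const uint64_t *data, size_t n, uint64_t pattern)
    {
        unsigned long total = 0;
        for (size_t i = 0; i < n; i++)
            total += popcount64(data[i] ^ pattern);
        return total;
    }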
Our new contributed article this week is from Lauro Rizzatti at Emulation and Verification Engineering (EVE) on leveraging FPGA-based verification systems to catch bugs in complex system-on-chip (SoC) designs. With FPGAs accelerating the verification process, you can gain a significant advantage in debugging embedded software and hardware-software interfaces, and dramatically increase the number of vectors you can pump through your hardware design.
Thanks for reading! If
there's anything we can do to make our publications more useful to
you, please let us know at: comments@fpgajournal.com
Kevin Morris – Editor, FPGA and Programmable Logic Journal
Cray Goes FPGA
Algorithm Acceleration in the New XD1
When
I was in college, I knew the future of supercomputing. The supercomputers
of the 21st century would be massive, gleaming masterpieces of technology. They would not be installed in buildings; rather, buildings would be designed and constructed around them - particularly to
house the cooling systems. The design specifics were fuzzy, but I was
reasonably sure that very low temperatures would be involved for superconducting connectivity, SQUIDs, or Josephson-junction-esque switching. Silicon would certainly have been long abandoned in favor of gallium arsenide or some even more exotic semiconductor material. I
believed that Cray, Inc., as the preeminent developer of supercomputers,
would be able to leverage these techniques to gain perhaps a full order of
magnitude of computing performance over the machines of the day.
A
few years later, when Xilinx rolled out their first FPGAs, I could see the
future of that technology as well. FPGAs would act as a sort of
system-level silicon super-glue, sitting at the periphery of the circuit
board and stitching together incompatible protocols. With the simple
addition of an FPGA, anything could be made to connect to anything else,
and programmability ensured that we could adapt on the fly and change our
design to leverage any new, improved component without having to abandon
the rest of our legacy design.
As I gazed into my crystal ball
(looking way out past the distorted reflection of my feathered hair,
Lacoste polo shirt, and wayfarer sunglasses), I could not envision any
connection between these two seemingly unrelated technology tracks.
Supercomputers would be designed and built from the ground up, using
carefully matched and optimized homogeneous components, while FPGAs would be the duct tape of electronics design, helping to hold together aging
multi-generational systems for a few more years of life in the field
before they were retired altogether. In my crystal ball, the two paths
were obviously diverging.
I was right about the Cray part. [more]
The Real Fear Factor
by Lauro Rizzatti, EVE-USA, Emulation and Verification Engineering (EVE)
Dealing with mass quantities of unsavory bugs is
commonplace in both reality TV and modern-day chip design. And the "fear factor" is the same for both: failure to do so in the time allotted can put an end to a contestant’s 15 minutes of fame or cost a designer his or her job.
In the era of the system-on-chip (SoC), dealing with
tens of millions of gates is a given for hardware design teams, but the
time-to-market game is getting more complex and fraught with danger due to
the explosion in embedded software. According to several design teams in
the midst of budgeting for their next design, the software portion of an SoC is growing at an annual rate of 140 percent, while hardware is expanding just 40 percent year over year.
If this is today's reality, how are
design teams able to keep up? And are there more practical ways to debug and verify the embedded parts of the design before the hardware is done?
In this new reality, there are three alternatives. The first
is a software-based development environment that models hardware in C at a
high level of abstraction, above the register transfer level (RTL). While it's fast, it suffers from two drawbacks. First, creating the model is time-consuming, and access to a complete library of ready-made models is beyond reach. Second, it is not suitable for verifying the integration between the embedded software and the underlying SoC hardware, because of the difficulty of accurately modeling hardware at this high level of abstraction. [more]
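For a rough sense of what such a high-level C model looks like (a hypothetical sketch, not EVE's methodology or any particular product), consider an untimed model of a DMA engine: registers become plain struct fields and an entire transfer completes in zero simulated time, which is convenient for early software development but abstracts away exactly the bus protocol, clocking, and interrupt behavior that hardware/software integration depends on.

    /* Hypothetical, untimed C model of a DMA engine (illustration only).
     * Registers are plain fields and a transfer completes instantly, with
     * no bus protocol, clock cycles, or interrupts; that is precisely the
     * detail lost at this level of abstraction. */
    #include <stddef.h>
    #include <string.h>

    typedef struct {
        unsigned src, dst, len;   /* "registers" modeled as plain fields */
        int      busy;
    } dma_model_t;

    void dma_start(dma_model_t *dma, unsigned char *mem,
                   unsigned src, unsigned dst, unsigned len)
    {
        dma->src = src;
        dma->dst = dst;
        dma->len = len;
        dma->busy = 1;
        memmove(mem + dst, mem + src, len);  /* whole transfer in zero time */
        dma->busy = 0;                       /* no completion interrupt */
    }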